
    Consent Verification Under Evolving Privacy Policies


    Goal models for acceptance requirements analysis and gamification design

    The success of software systems depends heavily on user engagement. Thus, to deliver engaging systems, software has to be designed carefully, taking into account Acceptance Requirements, such as “70% of users will use the system”, and the psychological factors that could influence users to use the system. Analysis can then consider mechanisms that affect these factors, such as Gamification (making a game out of system use), advertising, incentives and more. We propose a Systematic Acceptance Requirements Analysis Framework based on Gamification for supporting the requirements engineer in analyzing and designing engaging software systems. Our framework, named Agon, encompasses both a methodology and a meta-model capturing acceptance and gamification knowledge. In this paper, we describe the Agon Meta-Model and provide examples from the gamification of a decision-making platform in the context of a European Project.
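
    The idea of an acceptance requirement attached to a goal model can be pictured with a small sketch. The class and field names below are hypothetical illustrations and are not taken from the Agon meta-model, which the paper defines in full.

# Illustrative sketch only: a minimal goal-model structure for an acceptance
# requirement such as "70% of users will use the system". Class and field
# names are hypothetical and do not reproduce the Agon meta-model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Goal:
    name: str
    subgoals: List["Goal"] = field(default_factory=list)


@dataclass
class AcceptanceRequirement(Goal):
    # Target fraction of users expected to use the system (0.70 for 70%).
    target_adoption: float = 0.0
    # Psychological factors the analyst believes influence adoption.
    influencing_factors: List[str] = field(default_factory=list)


# Example: the acceptance requirement from the abstract, refined by two
# candidate mechanisms (gamification and incentives) modelled as subgoals.
adoption = AcceptanceRequirement(
    name="70% of users will use the system",
    target_adoption=0.70,
    influencing_factors=["motivation", "perceived usefulness"],
    subgoals=[Goal("Gamify system use"), Goal("Offer incentives")],
)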

    Gamification solutions for software acceptance: a comparative study of requirements engineering and organizational behavior techniques.

    Gamification is a powerful paradigm and a set of best practices used to motivate people to carry out a variety of ICT-mediated tasks. Designing gamification solutions and applying them to a given ICT system is a complex and expensive process (in time, competences and money), as software engineers have to cope with heterogeneous stakeholder requirements on the one hand and Acceptance Requirements on the other, which together ensure effective user participation and a high level of system utilization. As such, gamification solutions require significant analysis and design as well as suitable supporting tools and techniques. In this work, we compare concepts, tools and techniques for gamification design drawn from Software Engineering and Human and Organizational Behavior. We conduct the comparison by applying both sets of techniques to the Meeting Scheduling exemplar used extensively in the Requirements Engineering literature.

    Goal-oriented requirements engineering: an extended systematic mapping study.

    Over the last two decades, much attention has been paid to the area of goal-oriented requirements engineering (GORE), where goals are used as a useful conceptualization to elicit, model, and analyze requirements, capturing alternatives and conflicts. Goal modeling has been adapted and applied to many sub-topics within requirements engineering (RE) and beyond, such as agent orientation, aspect orientation, business intelligence, model-driven development, and security. Despite extensive efforts in this field, the RE community lacks a recent, general systematic literature review of the area. In this work, we present a systematic mapping study, covering the 246 top-cited GORE-related conference and journal papers, according to Scopus. Our literature map addresses several research questions: we classify the types of papers (e.g., proposals, formalizations, meta-studies), look at the presence of evaluation, the topics covered (e.g., security, agents, scenarios), frameworks used, venues, citations, author networks, and overall publication numbers. For most questions, we evaluate trends over time. Our findings show a proliferation of papers with new ideas and few citations, with a small number of authors and papers dominating citations; however, there is a slight rise in papers that build upon past work (implementations, integrations, and extensions). We see a rise in papers concerning adaptation/variability/evolution and a slight rise in case studies. Overall, interest in GORE has increased. We use our analysis results to make recommendations concerning future GORE research and make our data publicly available.

    Identifying Conflicts in Security Requirements with STS-ml

    Requirements are conflicting when there exists no system that satisfies them all. Conflicts often originate from the clashing needs of different stakeholders. Security requirements are no exception to the rule; moreover, their violation leads to severe consequences, such as privacy infringement, which, in many countries, implies burdensome monetary sanctions. In large (security) requirements models, conflicts are hard or impossible to identify manually. In these cases, automated reasoning is necessary. In this paper, we propose a reasoning framework to detect conflicting security requirements as well as conflicts between security requirements and business policies. Our framework formalises the STS-ml requirements modelling language for socio-technical systems. These systems consist of mutually interdependent humans, organisations, and software. In addition to presenting the framework, we apply it to a case study about e-Government, and we report on promising scalability results of our implementation.
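
    As a rough illustration of the kind of conflict such automated reasoning targets, the toy check below flags requirements that simultaneously permit and prohibit the same operation on the same information. It is a simplified sketch with invented e-Government data, not the STS-ml formalisation presented in the paper.

# Toy conflict check: an actor that is both authorised and prohibited to
# perform the same operation on the same information. Simplified sketch,
# not the STS-ml formalisation; all values are invented.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class Authorisation:
    actor: str
    operation: str       # e.g. "read", "transmit"
    information: str
    permitted: bool      # True = permission, False = prohibition


def find_conflicts(reqs: List[Authorisation]) -> List[Tuple[Authorisation, Authorisation]]:
    """Return pairs of requirements that grant and deny the same operation."""
    conflicts = []
    for i, a in enumerate(reqs):
        for b in reqs[i + 1:]:
            same_scope = (a.actor, a.operation, a.information) == \
                         (b.actor, b.operation, b.information)
            if same_scope and a.permitted != b.permitted:
                conflicts.append((a, b))
    return conflicts


requirements = [
    Authorisation("TaxAgency", "transmit", "citizen income data", True),
    Authorisation("TaxAgency", "transmit", "citizen income data", False),
]
print(find_conflicts(requirements))  # one conflicting pair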

    Security Requirements Engineering via Commitments

    Security Requirements Engineering (SRE) is concerned with the identification of security needs and the specification of security requirements of the system-to-be. Mainstream approaches to SRE either focus on technical security mechanisms or suggest high-level organizational abstractions that are hard to map to the actual design. Social commitments are a simple yet powerful abstraction to model social interactions and can be used effectively to specify security requirements. In this paper, we build on our previous work, proposing a novel goal-oriented modelling language called SecCo (Security via Commitments), in which the concept of a social commitment between social and technical actors is adopted to specify security requirements. Commitments enable the development of robust applications, wherein security needs are satisfied by assigning contractual validity to interactions.
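
    A social commitment is commonly read as C(debtor, creditor, antecedent, consequent): the debtor commits to the creditor that, if the antecedent holds, the consequent will be brought about. The sketch below records that structure as plain data; the example values are invented, and the mapping to SecCo's concrete syntax is not shown.

# Minimal sketch of a social commitment as plain data. Field names follow
# the common C(debtor, creditor, antecedent, consequent) reading; the
# example is invented and does not reproduce SecCo's syntax.
from dataclasses import dataclass


@dataclass(frozen=True)
class Commitment:
    debtor: str       # actor that takes on the responsibility
    creditor: str     # actor to whom the responsibility is owed
    antecedent: str   # condition under which the commitment is activated
    consequent: str   # state of affairs the debtor commits to bring about


# Example: a security requirement expressed as a commitment.
non_disclosure = Commitment(
    debtor="Hospital IT provider",
    creditor="Hospital",
    antecedent="patient records are delegated for processing",
    consequent="patient records are not disclosed to third parties",
)
print(non_disclosure)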

    STS: a Security Requirements Engineering methodology for socio-technical Systems

    Today’s software systems are situated within larger socio-technical systems, wherein they interact, by exchanging data and delegating tasks, with other technical components, humans, and organisations. The components (actors) of a socio-technical system are autonomous and only loosely controllable. Therefore, when interacting, they may endanger security by, for example, disclosing confidential information, breaking the integrity of others’ data, or relying on untrusted third parties. The design of a secure software system cannot disregard its placement within a socio-technical context, where security is threatened not only by technical attacks but also by social and organisational threats. This thesis proposes a tool-supported model-driven methodology, named STS, for conducting security requirements engineering for socio-technical systems. In STS, security requirements are specified, using the STS-ml requirements modelling language, as social contracts that constrain the social interactions and the responsibilities of the actors in the socio-technical system. A particular feature of STS-ml is that it clearly distinguishes information from its representation in terms of documents, and separates information flow from the permissions or prohibitions actors grant to one another over their interactions. This separation allows STS-ml to support a rich set of security requirements. The requirements models of STS-ml have a formal semantics, which enables automated reasoning for detecting possible conflicts among security requirements as well as conflicts between security requirements and actors’ business policies, that is, how they intend to achieve their objectives. Importantly, automated reasoning techniques are proposed to calculate the impact of social threats on actors’ information and their objectives. Modelling and reasoning capabilities are supported by STS-Tool. The effectiveness of the STS methodology in modelling, and ultimately specifying, security requirements for various socio-technical systems is validated with the help of case studies from different domains. We assess the scalability of the implementation of the conflict identification algorithms by conducting a scalability study using data from one of the case studies. Finally, we report on the results of user-oriented empirical evaluations of the STS methodology, the STS-ml modelling language, and the STS-Tool. These studies have been conducted over the past three years, starting from the initial proposal of the methodology, language, and tool, in order to improve them after each evaluation.
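
    One distinction mentioned above, information versus the documents that represent it, can be pictured with a small sketch: transmitting a document moves all the information it contains, so a flow check must resolve documents to information before comparing against granted permissions. The data and the check below are invented for illustration and do not reproduce STS-ml semantics or the STS-Tool implementation.

# Toy illustration of resolving documents to the information they represent
# before checking a transmission against granted permissions. Invented data;
# not the STS-ml semantics or the STS-Tool implementation.
from typing import Dict, Set

# Which information each document represents (document -> information items).
documents: Dict[str, Set[str]] = {
    "tax_return_form": {"citizen income", "citizen address"},
}

# Permissions granted per receiving actor (actor -> information it may receive).
permissions: Dict[str, Set[str]] = {
    "StatisticsOffice": {"citizen income"},
}


def undeclared_flows(document: str, receiver: str) -> Set[str]:
    """Information carried by the document that the receiver may not receive."""
    carried = documents.get(document, set())
    allowed = permissions.get(receiver, set())
    return carried - allowed


# Sending the whole form also leaks the citizen's address.
print(undeclared_flows("tax_return_form", "StatisticsOffice"))
# {'citizen address'}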